Modified Richardson iteration

Modified Richardson iteration is an iterative method for solving a system of linear equations. Richardson iteration was proposed by Lewis Fry Richardson in his work dated 1910. It is similar to the Jacobi and Gauss–Seidel methods.

We seek the solution to a set of linear equations, expressed in matrix terms as

 A x = b.

The Richardson iteration is

 
x^{(k+1)} = x^{(k)} + \omega \left( b - A x^{(k)} \right),

where \omega is a scalar parameter that has to be chosen such that the sequence x^{(k)} converges.

It is easy to see that the method is correct, because if it converges, then x^{(k+1)} \approx x^{(k)}, so the residual b - A x^{(k)} must be close to zero and x^{(k)} approximates a solution of A x = b.
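
As an illustration, the following is a minimal NumPy sketch of the iteration; the function name richardson, the stopping tolerance, and the iteration cap are illustrative choices rather than part of the method.

import numpy as np

def richardson(A, b, omega, x0=None, tol=1e-10, max_iter=10000):
    # Modified Richardson iteration: x_{k+1} = x_k + omega * (b - A x_k).
    x = np.zeros_like(b, dtype=float) if x0 is None else np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r = b - A @ x                # residual b - A x_k
        if np.linalg.norm(r) < tol:  # stop once the residual is small
            break
        x = x + omega * r            # Richardson update
    return x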

Convergence

Subtracting the exact solution x and introducing the notation e^{(k)} = x^{(k)} - x for the error, we get the equality for the errors

 
e^{(k+1)} = e^{(k)} - \omega A e^{(k)} = (I - \omega A) e^{(k)}.

Thus,

 
\|e^{(k+1)}\| = \|(I - \omega A) e^{(k)}\| \leq \|I - \omega A\| \|e^{(k)}\|,

for any vector norm and the corresponding induced matrix norm. Thus, if \|I - \omega A\| < 1, the method converges.
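
As a concrete check of this criterion, the induced 2-norm of the iteration matrix can be evaluated directly; the matrix A and the value of \omega below are purely illustrative.

import numpy as np

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])    # illustrative symmetric positive definite matrix
omega = 0.2
M = np.eye(2) - omega * A     # iteration matrix I - omega A
print(np.linalg.norm(M, 2))   # spectral norm is about 0.52 < 1, so the errors contract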

Suppose that A is diagonalizable and that (\lambda_j, v_j) are the eigenvalues and eigenvectors of A. The error converges to 0 if | 1 - \omega \lambda_j |< 1 for all eigenvalues \lambda_j. If, e.g., all eigenvalues are positive, this can be guaranteed if \omega is chosen such that 0 < \omega < 2/\lambda_{max}(A). The optimal choice, minimizing all | 1 - \omega \lambda_j |, is \omega = 2/(\lambda_{min}(A)+\lambda_{max}(A)), which gives the simplest Chebyshev iteration.
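
A short numerical sketch of this choice, with an illustrative symmetric positive definite matrix: compute the extreme eigenvalues, set \omega = 2/(\lambda_{min}(A)+\lambda_{max}(A)), and iterate. With this \omega the error shrinks by a factor of (\lambda_{max}-\lambda_{min})/(\lambda_{max}+\lambda_{min}) per step.

import numpy as np

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])              # illustrative symmetric positive definite matrix
b = np.array([1.0, 2.0])

lam = np.linalg.eigvalsh(A)             # eigenvalues of A (all positive here)
omega = 2.0 / (lam.min() + lam.max())   # optimal parameter 2/(lambda_min + lambda_max)

x = np.zeros_like(b)
for _ in range(100):
    x = x + omega * (b - A @ x)         # Richardson update with the optimal omega

print(np.allclose(A @ x, b))            # True: the iterates converge to the solution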

If there are both positive and negative eigenvalues, then for any nonzero \omega some eigenvalue satisfies | 1 - \omega \lambda_j | > 1, and the method diverges whenever the initial error e^{(0)} has a nonzero component along a corresponding eigenvector.

References